Search results 41–50 of 1,434 records.
41.
Air pollution has a negative impact on human health, so it is important to correctly forecast over-threshold events and give timely warnings to the population. Nonlinear models of the nonlinear autoregressive with exogenous variable (NARX) class have been used extensively to forecast air-pollution time series, mainly with artificial neural networks (NNs) modeling the nonlinearities. This work discusses the possible advantages of using polynomial NARX models instead, in combination with suitable model-structure selection methods. Furthermore, a suitably weighted mean square error (MSE) cost function (one-step-ahead prediction) is used in the identification/learning process to enhance the model's performance in peak estimation, which is the final purpose of this application. The proposed approach is applied to ground-level ozone concentration time series. An extended simulation analysis compares the two classes of models on a selected case study (the Milan metropolitan area) and investigates the effect of different weighting functions in the identification performance index. Results show that polynomial NARX models correctly reconstruct ozone concentrations, with performance similar to NN-based NARX models, while providing additional information, such as the best set of regressors for describing the studied phenomena. The simulation analysis also demonstrates the potential benefits of the weighted cost function, especially in increasing the reliability of peak estimation.
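The weighted identification cost described above can be illustrated with a short sketch. The snippet below is a minimal illustration, not the paper's implementation: it builds a degree-2 polynomial NARX regressor matrix and fits it by weighted least squares, with samples above an illustrative ozone threshold receiving a larger weight so that peak errors are penalized more.

```python
import numpy as np

def polynomial_narx_design(y, u, ny=2, nu=2):
    """Degree-2 polynomial NARX regressors built from past outputs
    y[k-1..k-ny] and past exogenous inputs u[k-1..k-nu]."""
    start = max(ny, nu)
    rows = []
    for k in range(start, len(y)):
        rows.append([y[k - i] for i in range(1, ny + 1)] +
                    [u[k - j] for j in range(1, nu + 1)])
    X = np.asarray(rows)
    # augment the linear lag terms with all pairwise products and a constant
    quad = np.stack([X[:, i] * X[:, j]
                     for i in range(X.shape[1])
                     for j in range(i, X.shape[1])], axis=1)
    return np.hstack([np.ones((len(X), 1)), X, quad]), y[start:]

def fit_weighted(Phi, target, w):
    """Weighted least squares: minimizes sum_k w_k * (target_k - Phi_k @ theta)^2."""
    sw = np.sqrt(w)
    theta, *_ = np.linalg.lstsq(Phi * sw[:, None], target * sw, rcond=None)
    return theta

rng = np.random.default_rng(0)
y = rng.random(500) * 200               # stand-in for ozone concentration
u = rng.random(500)                     # stand-in exogenous input (e.g. temperature)
Phi, target = polynomial_narx_design(y, u)
w = np.where(target > 120.0, 5.0, 1.0)  # heavier weight on peaks; threshold is illustrative
theta = fit_weighted(Phi, target, w)
y_hat = Phi @ theta                     # one-step-ahead predictions
```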
42.
Focus stacking is a technique of photomacrography and photomicrography that produces images of small three-dimensional subjects with an arbitrarily high depth of field, unencumbered by diffraction, by combining the in-focus portions of a stack of images of the subject recorded at different focal planes. Software packages are available for postprocessing an image stack into the final image, but the stack images are normally shot either with (typically expensive) automated equipment or by a manual, time-consuming, and error-prone procedure. This paper discusses the construction of an autonomous stacker, built from inexpensive preassembled electronics and a moderate amount of mechanical construction, and its C++ software.
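As an illustration of the postprocessing half of the pipeline (the paper's own software is C++), the following sketch implements the core per-pixel fusion rule of focus stacking with OpenCV: each output pixel is taken from the slice with the highest local sharpness, measured by the absolute Laplacian. Filenames and kernel sizes are illustrative assumptions.

```python
import cv2
import numpy as np

def focus_stack(paths):
    """Naive focus stacking: for each pixel, keep the value from the
    stack slice whose local sharpness (|Laplacian|) is highest.
    Assumes the slices are already aligned and equally exposed."""
    stack = [cv2.imread(p) for p in paths]
    sharpness = []
    for img in stack:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float64)
        lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
        # smooth the focus measure so the selection map is locally coherent
        sharpness.append(cv2.GaussianBlur(lap, (9, 9), 0))
    best = np.argmax(np.stack(sharpness), axis=0)   # index of sharpest slice per pixel
    out = np.zeros_like(stack[0])
    for i, img in enumerate(stack):
        out[best == i] = img[best == i]
    return out

# Illustrative usage (file names are placeholders):
# result = focus_stack(["slice_00.jpg", "slice_01.jpg", "slice_02.jpg"])
# cv2.imwrite("stacked.jpg", result)
```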
43.
We present a method for producing quad-dominant subdivided meshes that supports both adaptive refinement and adaptive coarsening. A hierarchical structure is stored implicitly in a standard half-edge data structure, while still allowing efficient navigation through the different levels of subdivision. Subdivided meshes contain a majority of quad elements and a moderate number of triangles and pentagons in the regions of transition across different levels of detail. Topological LOD editing is controlled with local conforming operators, which support both mesh refinement and mesh coarsening. We show two possible applications of this method: we define an adaptive subdivision surface scheme that is topologically and geometrically consistent with Catmull–Clark subdivision, and we present a remeshing method that produces semi-regular adaptive meshes.
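To make the data-structure claim concrete, here is a minimal sketch (an assumption about the general shape, not the authors' code) of a standard half-edge structure with a per-face subdivision-level tag: the hierarchy is encoded only through these tags, and the same face-loop traversal handles quads, triangles, and pentagons uniformly.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HalfEdge:
    """Minimal half-edge record; the subdivision hierarchy is not stored
    explicitly but recovered from per-face level tags."""
    origin: int                         # index of the vertex this edge leaves
    twin: Optional["HalfEdge"] = None   # opposite half-edge
    nxt: Optional["HalfEdge"] = None    # next half-edge around the face
    face: Optional["Face"] = None

@dataclass
class Face:
    edge: HalfEdge                      # one of the face's half-edges
    level: int = 0                      # subdivision depth of this face

def face_vertices(face):
    """Walk the face loop; works for quads, triangles, and pentagons alike."""
    verts, e = [], face.edge
    while True:
        verts.append(e.origin)
        e = e.nxt
        if e is face.edge:
            return verts
```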
44.
Despite the ability of current GPUs to handle heavy parallel computation tasks, their use for solving medical image segmentation problems is still not fully exploited and remains challenging. Many difficulties can arise, related, for example, to the different image modalities, the noise and artifacts of source images, or the variability in shape and appearance of the structures to segment. Motivated by practical problems of image segmentation in the medical field, we present in this paper a GPU framework based on explicit discrete deformable models, implemented on the NVidia CUDA architecture and aimed at the segmentation of volumetric images. The framework supports the parallel segmentation of different volumetric structures, as well as interaction during the segmentation process and real-time visualization of the intermediate results. Promising results in terms of accuracy and speed on a real segmentation experiment demonstrate the usability of the system.
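The explicit update at the heart of such discrete deformable models can be sketched as follows. This is a generic NumPy formulation, not the paper's CUDA kernels, and the force names and constants are illustrative: each vertex moves under an internal smoothing force plus an external, image-derived force, and on the GPU this loop maps naturally to one thread per vertex.

```python
import numpy as np

def deform_step(verts, neighbors, ext_force, alpha=0.2, beta=1.0, dt=0.5):
    """One explicit iteration of a discrete deformable model.
    verts     : (N, 3) array of surface vertex positions
    neighbors : list of neighbor index lists, one per vertex
    ext_force : callable mapping a position to an image-derived force."""
    new = np.empty_like(verts)
    for i, nbrs in enumerate(neighbors):
        # internal force: Laplacian smoothing toward the neighbor centroid
        internal = verts[nbrs].mean(axis=0) - verts[i]
        new[i] = verts[i] + dt * (alpha * internal + beta * ext_force(verts[i]))
    return new
```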
45.
We study the problem of guaranteeing correct execution semantics in parallel implementations of logic programming languages in the presence of built-in constructs that are sensitive to the order of execution. The declarative semantics of logic programming languages permit execution of the various goals in any arbitrary order (including in parallel). However, goals corresponding to extra-logical built-in constructs must respect the sequential order of execution to ensure correct semantics. Ensuring this correctness in the presence of such built-in constructs, while efficiently exploiting maximum parallelism, is a difficult problem. In this paper, we propose a formalization of this problem in terms of operations on dynamic trees. This abstraction enables us to (i) show that the existing schemes for handling order-sensitive computations used in current parallel systems are sub-optimal, and (ii) develop a novel, optimal scheme for handling order-sensitive goals that requires only a constant-time overhead per operation. While we present our results in the context of logic programming, they apply equally well to most parallel non-deterministic systems.
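To make the order-sensitivity condition concrete, the sketch below gives one plausible naive formulation (an illustration, not the paper's scheme): an order-sensitive builtin may execute only when every goal preceding it in the sequential left-to-right order has finished, checked here by walking up the execution tree and inspecting left siblings. This naive test costs time proportional to the tree size, whereas the scheme proposed in the paper answers the same query with only a constant-time overhead per operation.

```python
class GoalNode:
    """Node in the (dynamic) and-parallel execution tree."""
    def __init__(self, parent=None):
        self.parent = parent
        self.children = []
        self.finished = False
        if parent:
            parent.children.append(self)

def subtree_finished(n):
    return n.finished and all(subtree_finished(c) for c in n.children)

def may_execute_builtin(node):
    """An order-sensitive builtin may run only when every goal that
    precedes it in the sequential (left-to-right) order has finished.
    Naive check: walk up the tree, inspecting left siblings at each level."""
    cur = node
    while cur.parent is not None:
        for sib in cur.parent.children:
            if sib is cur:
                break
            if not subtree_finished(sib):
                return False        # an unfinished goal lies to our left
        cur = cur.parent
    return True
```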
46.
In this paper, we study the relation among Answer Set Programming (ASP) systems from a computational point of view. We consider smodels, dlv, and cmodels, ASP systems based on the stable model semantics: the first two are native ASP systems, while the last is SAT-based. We first show that smodels, dlv, and cmodels explore search trees with the same branching nodes (assuming, of course, the same branching heuristic) on the class of tight logic programs. Leveraging the fact that SAT-based systems rely on the deeply studied Davis–Logemann–Loveland (dll) algorithm, we derive new complexity results for the ASP procedures. We also show that on nontight programs the SAT-based systems are computationally different from the native procedures, and that the latter have computational advantages. Moreover, we show that native procedures can guarantee the “correctness” of a reported solution when reaching the leaves of the search trees (i.e., no stability check is needed), while this is not the case for SAT-based procedures on nontight programs. A similar advantage holds for dlv in comparison with smodels if the “well-founded” operator is disabled and only Fitting’s operator is used for negative inferences. We finally study the “cost” of achieving such advantages and comment on the extent to which the presented results extend to other systems.
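Tightness, the property that separates the two regimes above, is straightforward to check: a normal program is tight iff its positive dependency graph is acyclic. The following sketch (the rule representation is an illustrative assumption) performs that check with a depth-first search.

```python
def is_tight(rules):
    """A normal logic program is tight iff its positive dependency graph
    (edges from each rule head to the positive body atoms) is acyclic.
    `rules` is a list of (head, pos_body, neg_body) triples."""
    graph = {}
    for head, pos_body, _neg_body in rules:
        graph.setdefault(head, set()).update(pos_body)

    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def dfs(a):
        color[a] = GRAY
        for b in graph.get(a, ()):
            if color.get(b, WHITE) == GRAY:
                return False                  # positive cycle found
            if color.get(b, WHITE) == WHITE and not dfs(b):
                return False
        color[a] = BLACK
        return True

    return all(color.get(a, WHITE) != WHITE or dfs(a) for a in graph)

# p :- q, not r.  q :- not s.  is tight;  p :- p.  is not.
assert is_tight([("p", {"q"}, {"r"}), ("q", set(), {"s"})])
assert not is_tight([("p", {"p"}, set())])
```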
47.
48.
Bayesian networks are models for uncertain reasoning that are achieving growing importance also for the data mining task of classification. Credal networks extend Bayesian nets to sets of distributions, or credal sets. This paper extends a state-of-the-art Bayesian net for classification, the tree-augmented naive Bayes classifier, to credal sets originated from probability intervals. This extension provides a basis for addressing the fundamental problem of prior ignorance about the distribution that generates the data, which is commonplace in data mining applications. This issue is often neglected, but addressing it properly is key to ultimately drawing reliable conclusions from the inferred models. In this paper we formalize the new model, develop an exact linear-time classification algorithm, and evaluate the credal net-based classifier on a number of real data sets. The empirical analysis shows that the new classifier is good and reliable, and it raises a problem of excessive caution that is discussed in the paper. Overall, given the favorable trade-off between expressiveness and efficient computation, the newly proposed classifier appears to be a good candidate for the wide-scale application of reliable classifiers based on credal networks to real and complex tasks.
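The set-valued, cautious behavior of a credal classifier can be illustrated with a much simpler model than the paper's credal tree-augmented naive Bayes. The sketch below uses imprecise-Dirichlet-model intervals and interval dominance on a plain naive-Bayes factorization; the data layout and the prior strength s are illustrative assumptions.

```python
def idm_interval(count, total, s=1.0):
    """Imprecise Dirichlet model: lower/upper probability of an event
    observed `count` times out of `total`, with prior strength s."""
    return count / (total + s), (count + s) / (total + s)

def credal_predict(class_counts, feat_counts, x, s=1.0):
    """Toy credal classifier using interval dominance: class c dominates
    class d if the LOWER joint score of c exceeds the UPPER joint score
    of d. Returns the set of non-dominated classes, which may contain
    more than one element -- the cautious, set-valued output.
    class_counts : dict class -> count
    feat_counts  : dict class -> list (per feature) of {value: count}
    x            : tuple of observed feature values."""
    lo, hi = {}, {}
    n = sum(class_counts.values())
    for c, nc in class_counts.items():
        l, h = idm_interval(nc, n, s)
        for f, v in enumerate(x):
            lf, hf = idm_interval(feat_counts[c][f].get(v, 0), nc, s)
            l, h = l * lf, h * hf
        lo[c], hi[c] = l, h
    return {c for c in lo if not any(lo[d] > hi[c] for d in lo if d != c)}
```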
49.
50.
An interactive system is described for creating and animating deformable 3D characters. By using a hybrid layered model of kinematic and physics-based components together with an immersive 3D direct-manipulation interface, it is possible to quickly construct characters that deform naturally when animated and whose behavior can be controlled interactively through intuitive parameters. In this layered construction technique, called the elastic surface layer model, a simulated elastically deformable skin surface is wrapped around a kinematic articulated figure. Unlike previous layered models, the skin is free to slide along the underlying surface layers, constrained by geometric constraints that push the surface out and spring forces that pull it in toward the underlying layers. By tuning the parameters of the physics-based model, a variety of surface shapes and behaviors can be obtained, such as more realistic-looking skin deformation at the joints, skin sliding over muscles, and dynamic effects such as squash-and-stretch and follow-through. Since the elastic model derives all of its input forces from the underlying articulated figure, the animator may specify all of the physical properties of the character once, during the initial character design process, after which a complete animation sequence can be created using a traditional skeleton animation technique.

Character construction and animation are done using a 3D user interface based on two-handed manipulation registered with head-tracked stereo viewing. In our configuration, a six-degree-of-freedom head tracker and CrystalEyes shutter glasses are used to display stereo images on a workstation monitor that dynamically follow the user's head motion. 3D virtual objects can be made to appear at a fixed location in physical space, which the user may view from different angles by moving their head. To construct 3D animated characters, the user interacts with the simulated environment using both hands simultaneously: the left hand, controlling a Spaceball, is used for 3D navigation and object movement, while the right hand, holding a 3D mouse, manipulates the objects appearing in front of the screen through a virtual tool metaphor. Hand-eye coordination is made possible by registering virtual space to physical space, allowing a variety of complex 3D tasks necessary for constructing 3D animated characters to be performed more easily and rapidly than is possible with traditional interactive techniques.
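The skin-layer dynamics described above can be sketched in a few lines. This is a generic explicit spring step under assumed inputs (closest points, normals, and penetration depths precomputed elsewhere), not the system's actual solver, and the constants are illustrative.

```python
import numpy as np

def skin_step(skin, closest_on_body, normals, penetration,
              k_in=0.8, k_out=4.0, dt=0.1):
    """One explicit step of an elastic-surface-layer-style skin update.
    skin            : (N, 3) skin vertex positions
    closest_on_body : (N, 3) closest points on the underlying layer
    normals         : (N, 3) outward surface normals at those points
    penetration     : (N,) depth by which each vertex is inside (>0 = inside).
    Spring forces pull each skin vertex in toward the underlying layer,
    while a stiffer penalty pushes penetrating vertices back out."""
    pull = k_in * (closest_on_body - skin)                     # attraction springs
    push = k_out * np.maximum(penetration, 0.0)[:, None] * normals
    return skin + dt * (pull + push)
```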